10 research outputs found

    Predictive Braking With Brake Light Detection-Field Test

    Get PDF
    Driver assistance systems, such as adaptive cruise control, are increasingly common in modern vehicles. Our earlier experience with radar-based adaptive cruise control indicated repeatable abrupt behavior when approaching a stopped vehicle at high speed, a situation typical of extra-urban roads. Abrupt behavior in assisted driving not only decreases passenger trust but also reduces the comfort of such systems. We present the design and a proof of concept of a machine-vision-enhanced adaptive cruise controller. A machine-vision-based brake light detection system was implemented and tested in order to smooth the transition from coasting to braking and to ensure sufficiently early speed reduction. The machine vision system detects the brake lights of the vehicle ahead and transmits a command to the cruise controller to reduce speed. This paper reports the speed control system design and the experiments carried out to validate the system. The experiments showed that the system works as designed, reducing abrupt behavior. Measurements show that the brake-light-assisted cruise control was able to start decelerating about three seconds earlier than a cruise controller without brake light detection. Measurements also showed increased ride comfort, with maximum deceleration and minimum jerk levels improving by 5% to 31%. Peer reviewed.
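The value of the roughly three-second head start can be illustrated with basic constant-deceleration kinematics. This is a sketch with made-up speeds and gaps, not figures from the field test:

```python
def required_decel(speed_mps, gap_m):
    """Constant deceleration needed to stop within gap_m, from v^2 = 2*a*d."""
    return speed_mps ** 2 / (2.0 * gap_m)

def decel_with_earlier_onset(speed_mps, gap_m, head_start_s):
    """Deceleration if braking starts head_start_s earlier, i.e. while the
    gap to the stopped vehicle is still larger by speed * head_start."""
    return required_decel(speed_mps, gap_m + speed_mps * head_start_s)

v = 80 / 3.6                                     # 80 km/h, extra-urban road
late = required_decel(v, 60.0)                   # braking starts at a 60 m gap
early = decel_with_earlier_onset(v, 60.0, 3.0)   # brake-light cue: ~3 s sooner
```

With these illustrative numbers the required deceleration roughly halves, which is consistent with the reported comfort improvement, although the actual controller behaviour is more involved than a single constant-deceleration stop.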

    Brake Light Detection Algorithm for Predictive Braking

    Get PDF
    There has recently been a rapid increase in the number of partially automated systems in passenger vehicles. This has necessitated a greater focus on the effect these systems have on the comfort and trust of passengers. One significant issue is the delayed detection of stationary or harshly braking vehicles. This paper proposes a novel brake light detection algorithm in order to improve ride comfort. The system uses a camera and a YOLOv3 object detector to detect the bounding boxes of the vehicles ahead of the ego vehicle. The bounding boxes are preprocessed with L*a*b* colorspace thresholding. Thereafter, the bounding boxes are resized to a 30 × 30 pixel resolution and fed into a random forest algorithm. The novel detection system was evaluated using a dataset collected in the Helsinki metropolitan area in varying conditions. The experiments carried out revealed that the new algorithm reaches a high accuracy of 81.8%. For comparison, using the random forest algorithm alone produced an accuracy of 73.4%, thus proving the value of the preprocessing stage. Furthermore, a range test was conducted; it was found that with a suitable camera, the algorithm can reliably detect lit brake lights even at a distance of up to 150 m.
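The preprocessing stage described above (L*a*b* thresholding followed by a 30 × 30 resize before the random forest) might be sketched roughly as follows. The threshold value, the nearest-neighbour resize, and the assumption that the crop is already in L*a*b* form are all illustrative simplifications, not details from the paper:

```python
import numpy as np

def lab_redness_mask(lab_img, a_thresh=150):
    """Keep pixels whose a* channel (the red-green axis) exceeds a threshold.
    Lit brake lights push a* strongly toward red; a_thresh is illustrative."""
    return (lab_img[..., 1] >= a_thresh).astype(np.uint8)

def resize_nn(img, size=30):
    """Nearest-neighbour resize to size x size (a stand-in for a library resize)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def to_features(lab_img):
    """Threshold in L*a*b*, resize to 30x30, and flatten; the resulting vector
    would be fed to a trained random forest classifier."""
    masked = lab_img * lab_redness_mask(lab_img)[..., None]
    return resize_nn(masked).reshape(-1)
```

In the paper the flattened 30 × 30 crops are classified with a random forest (e.g. scikit-learn's `RandomForestClassifier`); the sketch stops at feature extraction.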

    Driver assistance systems and their bus communication in a vehicle (Kuljettajaa avustavat järjestelmät ja niiden väyläkommunikaatio ajoneuvossa)

    No full text
    Driver assistance systems are developing at a fast pace, and their principles and operation are becoming more complex as a result. Despite this rapid development, rather little knowledge about system operation and network communication is publicly available. In order to use the data already present in vehicle networks with concepts such as vehicle-to-everything (V2X), the communication needs to be studied case by case. The objective of this study was to find a good way to connect to the research vehicle's network buses and to analyze the communication on them. For this purpose, a device was designed that can log network data together with the driving situation in video format. The software was written in-house so that the captured data can easily be reused in several different applications. An agile methodology was followed during development so that as much as possible could be achieved within a fairly short time window. As a result of the study, a modular development device was built that can record a camera feed with timestamps matching the recorded log file. In addition, some of the research vehicle's systems, namely the cruise control and the radio, were successfully controlled with the device, and further systems could potentially be controlled by continuing the study of this vehicle's bus communication. Analysis of the network messages revealed, among other things, the pedal positions, the steering wheel position, and the door states.
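As an illustration of the kind of signal reverse engineering the thesis describes, a decoder for one bus signal might look like the following. The message ID 0x3A0 and the byte layout are entirely hypothetical; real layouts differ per vehicle and must be discovered by analysing the logged traffic:

```python
def decode_pedal_position(can_id, data):
    """Decode an accelerator pedal position from a raw CAN frame payload.
    The ID 0x3A0 and the 'third byte holds the position' layout are invented
    for illustration only -- they are not from the studied vehicle."""
    PEDAL_MSG_ID = 0x3A0          # hypothetical arbitration ID
    if can_id != PEDAL_MSG_ID:
        return None               # frame belongs to some other signal
    raw = data[2]                 # hypothetical byte carrying the position
    return raw * 100.0 / 255.0    # scale the 0..255 raw value to 0..100 %
```

In practice a library such as python-can would supply the `(can_id, data)` pairs from the bus, and each decoded value would be logged next to the matching video timestamp.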

    CAN bus simulator (CAN-väyläsimulaattori)

    Get PDF
    The subject of this final project was to build a CAN bus fault simulator for training use at Robert Bosch Oy. The commissioning company required the device to behave realistically, to be simple to use, and to be easy to carry. The CAN bus simulator is intended for simulating fault conditions and thereby for the continuing education of vehicle mechanics. In addition, a training package was produced, consisting of a theory section on CAN bus fault diagnostics and exercises to be carried out with the simulator. The work began with a study of CAN bus theory, Arduino development boards, and CAN bus transceivers. The device was built using two Arduino UNO development boards, for which custom CAN transceiver circuit boards were designed. The program code was written with the Arduino IDE, a free development environment for programming Arduino boards, and the circuit boards were designed with another free tool, EAGLE PCB. The work was finalized with the self-made CAN bus module circuits, a carrying case, and an LCD display. The end result of the project was a working, realistic, and easily transportable CAN bus fault simulator together with the supporting training material, which can be used for teaching vehicle mechanics. The commissioning company's requirements were met, and the device is housed in an easy-to-carry case. Fault conditions are selected through the display installed on the device, using a menu implemented in software. Four units are to be delivered to the commissioning company, and Helsinki Metropolia University of Applied Sciences has also shown interest in the product.
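The fault-menu idea can be sketched in software. This is only a stand-in for the Arduino-based hardware described above, with an invented set of fault modes standing in for the simulator's menu entries:

```python
import random

FAULTS = ("none", "drop", "corrupt")  # invented menu of fault modes

def inject_fault(frame, fault, rng=random):
    """Apply a selected fault to a CAN frame (given as bytes).
    'drop' mimics an open circuit (the frame never arrives) and 'corrupt'
    mimics electrical interference flipping bits in one payload byte."""
    if fault == "drop":
        return None
    if fault == "corrupt":
        i = rng.randrange(len(frame))
        return frame[:i] + bytes([frame[i] ^ 0xFF]) + frame[i + 1:]
    return frame
```

On the actual trainer the equivalent logic runs on the Arduino between the two CAN transceiver boards, with the fault selected from the LCD menu.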

    Infrastructure camera calibration with GNSS for vehicle localisation

    No full text
    Intelligent transportation and smart city applications are currently on the rise. In many applications, diverse and accurate sensor perception of vehicles is crucial. Relevant information could be conveniently acquired with traffic cameras, as cities already have an abundance of them. However, cameras have to be calibrated in order to acquire position data of vehicles. This paper proposes a novel automated calibration approach for partially connected vehicle environments. The approach utilises Global Navigation Satellite System (GNSS) positioning information shared by connected vehicles. Corresponding vehicle GNSS locations and image coordinates are utilised to fit a direct transformation between image and ground-plane coordinates. The proposed approach was validated with a research vehicle equipped with a Real-Time Kinematic-corrected GNSS receiver driving past three different cameras. On average, the camera estimates contained errors ranging from 1.5 to 2.0 m when compared to the GNSS positions of the vehicle. Considering the considerable lengths of the monitored road sections, up to 140 m, the accuracy of the camera-based localisation should be adequate for a number of intelligent transportation applications. In the future, the calibration approach should be evaluated with a fusion of stand-alone GNSS positioning and inertial measurements, to validate the calibration methodology with more common vehicle sensor equipment. Peer reviewed.
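A direct transformation between image and ground-plane coordinates, fitted from matched point pairs as described above, is commonly expressed as a planar homography estimated with the standard DLT least-squares formulation. This sketch assumes the matched pairs (vehicle detections in the image paired with shared GNSS fixes projected onto a local ground plane) are already available:

```python
import numpy as np

def fit_homography(img_pts, gnd_pts):
    """Fit a 3x3 homography mapping image pixels to ground-plane coordinates
    from >= 4 matched point pairs, via the DLT least-squares formulation:
    stack two linear constraints per pair and take the SVD null vector."""
    A = []
    for (u, v), (x, y) in zip(img_pts, gnd_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # solution is defined up to scale

def image_to_ground(H, u, v):
    """Project an image point to ground-plane coordinates."""
    x, y, w = H @ (u, v, 1.0)
    return x / w, y / w
```

With noisy GNSS fixes one would use many more correspondences (and possibly a robust estimator) rather than a minimal set, but the fitting step is the same.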

    Classification of Trash and Valuables with Machine Vision in Shared Cars

    No full text
    This study focused on the possibility of implementing a vision-based architecture to monitor and detect the presence of trash or valuables in shared cars. The system was set up to take pictures of the rear seating area of a four-door passenger car. Image capture was performed with a stationary wide-angle camera unit, and image classification was conducted with a prediction model on a remote server. For classification, a convolutional neural network (CNN) in the form of a fine-tuned VGG16 model was developed. The CNN yielded an accuracy of 91.43% on a batch of 140 test images. To analyse the predictions, a confusion matrix was used, and in addition, the certainty of the distinct output classes was examined for each predicted image. The execution time of the system, from capturing an image to displaying the results, ranged from 5.7 to 17.2 s. Misclassifications by the prediction model were observed primarily due to variation in ambient light levels and shadows within the images, which left the target items lacking contrast with their neighbouring background. Developments pertaining to the modularity of the camera unit and expanding the dataset of training images are suggested for potential future research. Peer reviewed.
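The evaluation described above, overall accuracy on a test batch plus a confusion matrix over the predictions, can be reproduced with a few lines:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def accuracy(cm):
    """Fraction of predictions on the diagonal of the confusion matrix."""
    return np.trace(cm) / cm.sum()
```

Off-diagonal entries show which classes are confused with which, for example items blending into a shadowed background being mistaken for an empty seat.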

    Architecture for determining the cleanliness in shared vehicles using an integrated machine vision and indoor air quality-monitoring system

    No full text
    Funding Information: This work is partially supported by EIT Urban Mobility. The funders had no role in study design, data collection, analysis, decision to publish, or preparation of the manuscript. Publisher Copyright: © 2023, The Author(s).
    In an attempt to mitigate emissions and road traffic, significant interest has recently been noted in expanding the use of shared vehicles to replace private modes of transport. However, one outstanding issue has been the hesitancy of passengers to use shared vehicles due to substandard levels of interior cleanliness resulting from items left behind by previous users. The current research focuses on developing a novel prediction model using computer vision, capable of detecting various types of trash and valuables in a vehicle interior in a timely manner to enhance ambience and passenger comfort. The interior state is captured by a stationary wide-angle camera unit located above the seating area. The acquired images are preprocessed to remove unwanted areas and passed to a convolutional neural network (CNN) capable of predicting the type and location of leftover items. The algorithm was validated using data collected from two research vehicles under varying light and shadow conditions. The experiments yielded an accuracy of 89% over the distinct classes of leftover items and an accuracy of 91% over the general classes of trash and valuables. The average execution time was 65 s from image acquisition in the vehicle to displaying the results on a remote server. A custom dataset of 1379 raw images was also made publicly available for future development work. Additionally, an indoor air quality (IAQ) unit capable of detecting specific air pollutants inside the vehicle was implemented. Based on the pilots conducted for air quality monitoring within the vehicle cabin, an IAQ index was derived, corresponding to a 6-level scale in which each level is associated with an explicit state of interior odour. Future work will focus on integrating the two systems (item detection and air quality monitoring) to produce a discrete level of cleanliness. The current dataset will also be expanded by collecting data from real shared vehicles in operation. Peer reviewed.
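A 6-level index of the kind described might be derived from pollutant readings as follows. The single-pollutant input and the breakpoint values are illustrative only, not figures from the paper, whose index is built from several pollutants measured by the in-cabin IAQ unit:

```python
def iaq_level(voc_ppb):
    """Map a volatile organic compound reading to a 6-level IAQ index,
    where level 1 is the cleanest interior odour state and level 6 the worst.
    The VOC breakpoints below are hypothetical."""
    bounds = (50, 100, 200, 400, 800)  # hypothetical upper bounds, ppb
    for level, upper in enumerate(bounds, start=1):
        if voc_ppb <= upper:
            return level
    return 6
```

Banded breakpoint scales of this shape are how common air quality indices (e.g. the US EPA AQI) turn raw concentrations into a small ordinal scale.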
